
    Conquest of the four quarters : traditional accounts of the life of Śaṅkara

    Some seven hundred years after Śaṅkara wrote the learned commentaries that established his reputation as one of the foremost interpreters of Vedānta, a series of hagiographies began to emerge which glorified him as an incarnation of Śiva. Although they were composed exclusively in Sanskrit, these works eventually secured him a place in popular culture. One text in particular stands out from the rest, the Śaṅkaradigvijaya of Mādhava. This work, composed between 1650 and 1800, skilfully brought together materials from several earlier hagiographies. Its popularity grew to such an extent that it came to eclipse the other works, which have languished in relative obscurity ever since. These hagiographies, along with the Śaṅkaradigvijaya, have been virtually ignored by critical scholars because they are of little historical value. Yet the authors of these works had no intention of writing history. They sought to deify Śaṅkara and, to this end, mythography was a far more potent medium than biography. In this study historiographical concerns are largely left aside in focusing on the hagiographies composed prior to and including the Śaṅkaradigvijaya, i.e., eight texts in all. My primary aim is to consider how Śaṅkara has been received in India, and in particular to examine the conceptual models upon which his life story is constructed. The thesis is organized along the lines of the features that stand out most prominently in the hagiographies. Firstly, there are the mythic structures which provide not only the peaks but also the foundation of the narrative. The Śaṅkara story is cast firmly within the framework of Śaiva mythology: the protagonist is, above all, an avatāra of Śiva. Secondly, I have attached considerable importance to the sense of place. Śaṅkara's grand tour of the sacred sites lends cohesion and continuity to the narrative. Ultimately his journey proves to be a quest for the throne of omniscience. Thirdly, there are the great debates which culminate in a digvijaya. Śaṅkara's conquest of the four quarters, along with his ascension to the throne of omniscience, highlights the complementarity of royal and ascetic values in traditional India. It is through the digvijaya that Śaṅkara fulfills his mission of restoring harmony to a divided land, and so becomes a national hero. Finally, I have paid much attention to the legacy of Śaṅkara as well as the continuity of the Advaita sampradāya, in order to emphasize that theirs is a living tradition.

    Stem cell mechanobiology

    Stem cells are undifferentiated cells that are capable of proliferation, self-maintenance and differentiation towards specific cell phenotypes. These processes are controlled by a variety of cues, including physicochemical factors associated with the specific mechanical environment in which the cells reside. The control of stem cell biology through mechanical factors remains poorly understood and is the focus of the developing field of mechanobiology. This review provides an insight into the current knowledge of the role of mechanical forces in the induction of differentiation of stem cells. While the details of individual studies are complex and typically depend on the stem cell type studied and the model system adopted, certain key themes emerge. First, the differentiation process affects the mechanical properties of the cells and of specific subcellular components. Second, stem cells are able to detect and respond to alterations in the stiffness of their surrounding microenvironment via induction of lineage-specific differentiation. Finally, the application of external mechanical forces to stem cells, transduced through a variety of mechanisms, can initiate and drive differentiation processes. The coalescence of these three key concepts permits the introduction of a new theory for the maintenance of stem cells, and alternatively their differentiation, via the concept of a stem cell 'mechano-niche', defined as a specific combination of cell mechanical properties, extracellular matrix stiffness and external mechanical cues conducive to the maintenance of the stem cell population.

    Ruya: Memory-Aware Iterative Optimization of Cluster Configurations for Big Data Processing

    Selecting appropriate computational resources for data processing jobs on large clusters is difficult, even for expert users like data engineers. Inadequate choices can result in vastly increased costs without significantly improving performance. One crucial aspect of selecting an efficient resource configuration is avoiding memory bottlenecks: by knowing the required memory of a job in advance, the search space for an optimal resource configuration can be greatly reduced. We therefore present Ruya, a method for memory-aware optimization of data processing cluster configurations based on iteratively exploring a narrowed-down search space. First, we perform job profiling runs with small samples of the dataset on just a single machine to model the job's memory usage patterns. Second, we prioritize cluster configurations with a suitable amount of total memory, and within this reduced search space we iteratively search for the best cluster configuration with Bayesian optimization. The search stops once it converges on a configuration that is believed to be optimal for the given job. In our evaluation on a dataset of 1031 Spark and Hadoop jobs, the number of search iterations needed to find an optimal configuration is roughly halved compared to the baseline.
    Comment: 9 pages, 5 figures, 3 tables; IEEE BigData 2022. arXiv admin note: substantial text overlap with arXiv:2206.1385
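    To make the two-stage idea concrete, the following is a minimal, hypothetical sketch (not Ruya's actual code or API): candidate cluster configurations are first filtered by the memory requirement estimated from the profiling runs, then a Gaussian-process surrogate iteratively selects the next configuration to try. The names CONFIGS, narrow_search_space, run_job and the synthetic cost model are all illustrative assumptions.

```python
# Hypothetical sketch of memory-aware iterative configuration search.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Hypothetical candidates: (number of nodes, memory per node in GB).
CONFIGS = [(n, m) for n in (2, 4, 8, 16) for m in (8, 16, 32, 64)]

def narrow_search_space(configs, required_mem_gb):
    """Keep only configurations whose total memory covers the job's
    requirement, as estimated from small single-machine profiling runs."""
    return [c for c in configs if c[0] * c[1] >= required_mem_gb]

def bayesian_search(configs, run_job, n_iters=10):
    """Fit a Gaussian-process surrogate to observed runtimes and repeatedly
    pick the most promising untried configuration (lower confidence bound)."""
    rng = np.random.default_rng(0)
    tried = [configs[rng.integers(len(configs))]]   # random initial point
    runtimes = [run_job(tried[0])]
    for _ in range(n_iters - 1):
        candidates = [c for c in configs if c not in tried]
        if not candidates:
            break
        gp = GaussianProcessRegressor().fit(np.array(tried), np.array(runtimes))
        mu, sigma = gp.predict(np.array(candidates), return_std=True)
        best = candidates[int(np.argmin(mu - 1.96 * sigma))]  # optimism
        tried.append(best)
        runtimes.append(run_job(best))
    return tried[int(np.argmin(runtimes))]

# Hypothetical demo: a synthetic job with a ~60 GB working set whose runtime
# plateaus once total memory covers it; node count adds overhead.
space = narrow_search_space(CONFIGS, required_mem_gb=60)
print(bayesian_search(space, lambda c: 100 / min(c[0] * c[1], 60) + c[0]))
```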

    Leveraging Reinforcement Learning for Task Resource Allocation in Scientific Workflows

    Scientific workflows are designed as directed acyclic graphs (DAGs) and consist of multiple dependent task definitions. They are executed over large amounts of data, often resulting in thousands of tasks with heterogeneous compute requirements and long runtimes, even on cluster infrastructures. In order to optimize workflow performance, enough resources, e.g., CPU and memory, need to be provisioned for the respective tasks. Typically, workflow systems rely on user resource estimates, which are known to be highly error-prone and can result in over- or underprovisioning. While resource overprovisioning leads to high resource wastage, underprovisioning can result in long runtimes or even failed tasks. In this paper, we propose two reinforcement learning approaches, based on gradient bandits and Q-learning respectively, to minimize resource wastage by selecting suitable CPU and memory allocations. We provide a prototypical implementation in the well-known scientific workflow management system Nextflow, evaluate our approaches with five workflows, and compare them against the default resource configurations and a state-of-the-art feedback loop baseline. The evaluation shows that our reinforcement learning approaches significantly reduce resource wastage compared to the default configuration. Furthermore, our approaches reduce the allocated CPU hours by 6.79% and 24.53% compared to the state-of-the-art feedback loop.
    Comment: Paper accepted at the 2022 IEEE International Conference on Big Data, Workshop BPOD 2022
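    As a rough illustration of the gradient-bandit half of the approach (the paper also uses Q-learning; this sketch is not the authors' implementation), a bandit can learn a softmax preference over a discrete set of memory allocations, with the reward defined as negative wastage plus a large penalty for underprovisioning. All action values and rewards below are hypothetical.

```python
# Hypothetical gradient-bandit sketch for choosing a task's memory allocation.
import numpy as np

rng = np.random.default_rng(0)

class GradientBandit:
    """Learns a softmax preference over discrete memory allocations."""
    def __init__(self, actions, alpha=0.1):
        self.actions = actions            # candidate memory sizes in GB
        self.h = np.zeros(len(actions))   # action preferences
        self.avg_reward = 0.0             # incremental reward baseline
        self.t = 0
        self.alpha = alpha

    def policy(self):
        e = np.exp(self.h - self.h.max())
        return e / e.sum()

    def select(self):
        return int(rng.choice(len(self.actions), p=self.policy()))

    def update(self, idx, reward):
        self.t += 1
        self.avg_reward += (reward - self.avg_reward) / self.t
        adv = reward - self.avg_reward
        self.h -= self.alpha * adv * self.policy()  # push all actions down...
        self.h[idx] += self.alpha * adv             # ...and the chosen one up

# Hypothetical task whose observed peak usage is 7 GB: underprovisioning
# fails the task (large penalty), overprovisioning wastes memory.
bandit = GradientBandit(actions=[4, 8, 16, 32])
for _ in range(1000):
    i = bandit.select()
    alloc, peak = bandit.actions[i], 7.0
    reward = -1000.0 if alloc < peak else -(alloc - peak)
    bandit.update(i, reward)
print(bandit.actions[bandit.select()])  # likely settles on 8 GB
```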

    Predicting Dynamic Memory Requirements for Scientific Workflow Tasks

    With the increasing amount of data available to scientists in disciplines as diverse as bioinformatics, physics, and remote sensing, scientific workflow systems are becoming increasingly important for composing and executing scalable data analysis pipelines. When writing such workflows, users need to specify the resources to be reserved for tasks so that sufficient resources are allocated on the target cluster infrastructure. Crucially, underestimating a task's memory requirements can result in task failures. Therefore, users often resort to overprovisioning, resulting in significant resource wastage and decreased throughput. In this paper, we propose a novel online method that uses monitoring time series data to predict task memory usage in order to reduce the memory wastage of scientific workflow tasks. Our method predicts a task's runtime, divides it into k equally-sized segments, and learns the peak memory value for each segment depending on the total file input size. We evaluate the prototype implementation of our method using workflows from the publicly available nf-core repository, showing an average memory wastage reduction of 29.48% compared to the best state-of-the-art approach.
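    A minimal sketch of the segment-wise idea, assuming simple per-segment linear models of peak memory over total input size (the function names and monitoring data are hypothetical, not the paper's implementation):

```python
# Hypothetical sketch: per-segment peak-memory models from monitoring data.
import numpy as np

K = 4  # number of equally sized runtime segments

def segment_peaks(mem_series, k=K):
    """Split a monitored memory time series into k equal segments and
    return the peak memory value of each segment."""
    chunks = np.array_split(np.asarray(mem_series), k)
    return np.array([c.max() for c in chunks])

def fit_segment_models(input_sizes, mem_series_list, k=K):
    """Fit one least-squares line per segment, mapping total input size
    to that segment's peak memory, from historical monitoring data."""
    x = np.asarray(input_sizes, dtype=float)
    peaks = np.stack([segment_peaks(s, k) for s in mem_series_list])
    return [np.polyfit(x, peaks[:, j], deg=1) for j in range(k)]

def predict_segments(models, input_size, safety=1.1):
    """Predicted peak memory per segment for a new task, with a small
    safety margin to reduce the risk of task failure."""
    return np.array([np.polyval(m, input_size) for m in models]) * safety

# Hypothetical monitoring data: two prior runs with different input sizes.
runs = [np.array([1, 2, 4, 3, 2, 1, 1, 1]), np.array([2, 4, 8, 6, 4, 2, 2, 2])]
models = fit_segment_models([10, 20], runs)
print(predict_segments(models, 15))  # per-segment allocations, 15-unit input
```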

    Lotaru: Locally Predicting Workflow Task Runtimes for Resource Management on Heterogeneous Infrastructures

    Many resource management techniques for task scheduling, energy and carbon efficiency, and cost optimization in workflows rely on a priori knowledge of task runtimes. Building runtime prediction models on historical data is often not feasible in practice, as workflows, their input data, and the cluster infrastructure change. Online methods, on the other hand, which estimate task runtimes on specific machines while the workflow is running, have to cope with a lack of measurements during start-up. Frequently, scientific workflows are executed on heterogeneous infrastructures consisting of machines with different CPU, I/O, and memory configurations, which further complicates runtime prediction because the same task runs differently on different machine types. This paper presents Lotaru, a method for locally predicting the runtimes of scientific workflow tasks before they are executed on heterogeneous compute clusters. Crucially, our approach does not rely on historical data and copes with the lack of training data during start-up. To this end, we use microbenchmarks, reduce the input data to quickly profile the workflow locally, and predict a task's runtime with a Bayesian linear regression based on the data points gathered from the local workflow execution and the microbenchmarks. Due to its Bayesian approach, Lotaru provides uncertainty estimates that can be used for advanced scheduling methods on distributed cluster infrastructures. In our evaluation with five real-world scientific workflows, our method outperforms two state-of-the-art runtime prediction baselines and decreases the absolute prediction error by more than 12.5%. In a second set of experiments, using our method's predicted runtimes for state-of-the-art scheduling, carbon reduction, and cost prediction yields results close to those achieved with perfect prior knowledge of runtimes.
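    For the prediction step, a Bayesian linear regression returns both a mean estimate and a predictive standard deviation that a scheduler can exploit. The following toy sketch uses scikit-learn's BayesianRidge with hypothetical profiling features; it illustrates the general technique, not Lotaru's actual model.

```python
# Hypothetical sketch: Bayesian linear regression with uncertainty estimates.
import numpy as np
from sklearn.linear_model import BayesianRidge

# Hypothetical profiling data: (input_size_gb, machine_cpu_score) -> runtime_s,
# as might be gathered from downsampled local runs and microbenchmarks.
X_train = np.array([[1, 1.0], [2, 1.0], [1, 0.5], [4, 1.0], [2, 0.5]])
y_train = np.array([10.0, 19.0, 21.0, 41.0, 39.0])

model = BayesianRidge().fit(X_train, y_train)

# Predict the runtime of a 3 GB input on a machine with CPU score 0.8,
# together with the standard deviation of the predictive distribution.
mean, std = model.predict(np.array([[3, 0.8]]), return_std=True)
print(f"predicted runtime: {mean[0]:.1f}s +/- {std[0]:.1f}s")
```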

    How Workflow Engines Should Talk to Resource Managers: A Proposal for a Common Workflow Scheduling Interface

    Scientific workflow management systems (SWMSs) and resource managers together ensure that tasks are scheduled on provisioned resources so that all dependencies are obeyed and some optimization goal, such as makespan minimization, is fulfilled. In practice, however, there is no clear separation of scheduling responsibilities between an SWMS and a resource manager, because there exists no agreed-upon separation of concerns between their different components. This has two consequences. First, the lack of a standardized API to exchange scheduling information between SWMSs and resource managers hinders portability: it incurs costly adaptations whenever one component is replaced by another (e.g., one SWMS by another on the same resource manager). Second, due to overlapping functionalities, current installations often actually have two schedulers, both making partial scheduling decisions under incomplete information, leading to suboptimal workflow scheduling. In this paper, we propose a simple REST interface between SWMSs and resource managers, which allows any SWMS to pass dynamic workflow information to a resource manager, enabling maximally informed scheduling decisions. We provide an exemplary implementation of this API for Nextflow as an SWMS and Kubernetes as a resource manager. Our experiments with nine real-world workflows show that this strategy reduces makespan by up to 25.1%, and by 10.8% on average, compared to the standard Nextflow/Kubernetes configuration. Furthermore, a more widespread implementation of this API would enable leaner code bases, a simpler exchange of components of workflow systems, and a unified place to implement new scheduling algorithms.
    Comment: Paper accepted at the 2023 23rd IEEE International Symposium on Cluster, Cloud and Internet Computing (CCGrid)
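    To illustrate what such a REST interface between an SWMS and a resource manager could look like, here is a hypothetical sketch using Flask; the routes, payload fields, and the trivial placement stub are assumptions, not the paper's actual specification.

```python
# Hypothetical sketch of a workflow-scheduling REST interface.
from flask import Flask, request, jsonify

app = Flask(__name__)
workflows = {}  # workflow id -> tasks reported so far

@app.post("/v1/workflows/<wf_id>/tasks")
def register_tasks(wf_id):
    """The SWMS reports runnable tasks with dependencies and resource hints,
    so the resource manager can make informed placement decisions."""
    payload = request.get_json()
    workflows.setdefault(wf_id, []).extend(payload["tasks"])
    return jsonify(status="accepted"), 202

@app.get("/v1/workflows/<wf_id>/schedule")
def get_schedule(wf_id):
    """Return a placement decision per pending task (trivial stub here:
    every task goes to a single hypothetical node)."""
    tasks = workflows.get(wf_id, [])
    return jsonify([{"task": t["name"], "node": "node-0"} for t in tasks])

if __name__ == "__main__":
    app.run(port=8080)
```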